Generating Fair Consensus Statements with Social Choice on Token-Level MDPs

Carter Blair, Kate Larson

arXiv.org Artificial Intelligence

Current frameworks for consensus statement generation with large language models lack the inherent structure needed to provide provable fairness guarantees when aggregating diverse free-form opinions. We model the task as a multi-objective, token-level Markov Decision Process (MDP), where each objective corresponds to an agent's preference. Token-level rewards for each agent are derived from their policy (e.g., a personalized language model). This approach utilizes the finding that such policies implicitly define optimal Q-functions, providing a principled way to quantify rewards at each generation step without a value function (Rafailov et al., 2024). This MDP formulation creates a formal structure amenable to analysis using principles from social choice theory. We propose two approaches grounded in social choice theory. First, we propose a stochastic generation policy guaranteed to be in the ex-ante core, extending core stability concepts from voting theory to text generation. This policy is derived from an underlying distribution over complete statements that maximizes proportional fairness (Nash Welfare). Second, for generating a single statement, we target the maximization of egalitarian welfare using search algorithms within the MDP framework. Empirically, experiments using language models to instantiate agent policies show that search guided by the egalitarian objective generates consensus statements with improved worst-case agent alignment compared to baseline methods, including the Habermas Machine (Tessler et al., 2024).
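The egalitarian objective described above can be illustrated with a minimal sketch: a greedy token-level search that, at each step, picks the token maximizing the *minimum* cumulative score across agents. Here each agent's token-level reward is stood in for by a toy log-probability function (`make_agent`, `egalitarian_greedy` are hypothetical names, not from the paper); real agents would be personalized language models whose policies, per Rafailov et al. (2024), implicitly define token-level Q-functions.

```python
# Hypothetical sketch of egalitarian-welfare-guided token search.
# Assumption: each agent's preference is a token-level log-probability
# function over (prefix, token); the paper derives these from agent policies.
import math
from typing import Callable, List


def make_agent(preferred: str) -> Callable[[List[str], str], float]:
    """Toy agent: rewards its preferred token, mildly penalizes others."""
    def logp(prefix: List[str], token: str) -> float:
        return 0.0 if token == preferred else -1.0
    return logp


def egalitarian_greedy(agents, vocab, horizon):
    """Greedily extend the statement, maximizing the worst-off agent's
    cumulative score (egalitarian welfare) at every step."""
    prefix: List[str] = []
    scores = [0.0] * len(agents)  # cumulative reward per agent
    for _ in range(horizon):
        best_tok, best_min = None, -math.inf
        for tok in vocab:
            cand = [s + a(prefix, tok) for s, a in zip(scores, agents)]
            if min(cand) > best_min:  # compare by worst-case agent score
                best_min, best_tok = min(cand), tok
        scores = [s + a(prefix, best_tok) for s, a in zip(scores, agents)]
        prefix.append(best_tok)
    return prefix, min(scores)
```

With two agents preferring different tokens, the search alternates between them rather than favoring either one, which is the qualitative behavior the egalitarian objective targets; the paper's actual method uses stronger search algorithms over language-model policies, not this greedy toy.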


AI mediation tool may help reduce culture war rifts, say researchers

The Guardian

Artificial intelligence could help reduce some of the most contentious culture war divisions through a mediation process, researchers claim. Experts say a system that can create group statements that reflect majority and minority views is able to help people find common ground. Prof Chris Summerfield, a co-author of the research from the University of Oxford, who worked at Google DeepMind at the time the study was conducted, said the AI tool could have multiple purposes. "What I would like to see it used for is to give political leaders in the UK a better sense of what people in the UK really think," he said, noting surveys gave only limited insights, while forums known as citizens' assemblies were often costly, logistically challenging and restricted in size. Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the "Habermas Machine" – an AI system named after the German philosopher Jürgen Habermas. The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all.